
    Corner symmetry and quantum geometry

    By virtue of the Noether theorems, the vast gauge redundancy of general relativity provides us with a rich algebra of boundary charges that generate physical symmetries. These charges are located at codimension-2 entangling surfaces called corners. The presence of non-trivial corner symmetries associated with any entangling cut places stringent constraints on the theory's mathematical structure and serves as a guide through quantization. This report reviews new and recent results for non-perturbative quantum gravity, which are natural consequences of this structure. First, we establish that the corner symmetry derived from the gauge principle encodes quantum entanglement across internal boundaries. We also explain how the quantum representation of the corner symmetry algebra yields a notion of quantum geometry. We then focus our discussion on the first-order formulation of gravity and show how many results obtained in the continuum connect naturally with previous results in loop quantum gravity. In particular, we show that it is possible to obtain, purely from quantization and without discretization, an area operator with discrete spectrum, which is covariant under local Lorentz symmetry. We emphasize that while loop gravity correctly captures some of the gravitational quantum numbers, it does not capture all of them, which points towards important directions for future developments. Finally, we discuss the understanding of the gravitational dynamics along null surfaces as a conservation of symmetry charges associated with a Carrollian fluid. (Comment: 29 pages. Revised version taking into account comments by the referees. This is a preprint of a chapter to appear in the "Handbook of Quantum Gravity", edited by Cosimo Bambi, Leonardo Modesto and Ilya Shapiro, Springer, 2023.)
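For orientation, the discrete area spectrum familiar from loop quantum gravity, against which the covariant corner-symmetry result is compared, takes the standard form (a well-known result quoted only for context, with γ the Barbero-Immirzi parameter, ℓ_P the Planck length, and the sum running over punctures carrying spins j_i; the report's Lorentz-covariant operator carries additional quantum numbers beyond these):

```latex
\hat{A}\,\lvert \{j_i\} \rangle \;=\; 8\pi\gamma\,\ell_P^{2}\,\sum_i \sqrt{j_i(j_i+1)}\;\lvert \{j_i\} \rangle
```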

    A Deep Learning Approach for Burned Area Segmentation with Sentinel-2 Data

    Wildfires have major ecological, social and economic consequences. Information about the extent of burned areas is essential to assess these consequences and can be derived from remote sensing data. Over recent years, several methods have been developed to segment burned areas with satellite imagery. However, these methods mostly require extensive preprocessing, while deep learning techniques - which have successfully been applied to other segmentation tasks - have yet to be fully explored. In this work, we combine sensor-specific and methodological developments from the past few years and propose an automatic processing chain, based on deep learning, for burned area segmentation using mono-temporal Sentinel-2 imagery. In particular, we created a new training and validation dataset, which is used to train a convolutional neural network based on a U-Net architecture. We performed several tests on the input data and reached optimal network performance using the spectral bands of the visual, near-infrared and shortwave-infrared domains. The final segmentation model achieved an overall accuracy of 0.98 and a kappa coefficient of 0.94.
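The reported scores (overall accuracy and kappa coefficient) are both computable from a confusion matrix of reference versus predicted labels. A minimal sketch, with hypothetical counts that are illustrative and not taken from the paper:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified pixels (diagonal over total)."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = cm.sum()
    p_o = np.trace(cm) / n                      # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 confusion matrix: rows = reference, cols = predicted
# (classes: unburned, burned)
cm = np.array([[900, 10],
               [ 10, 80]])
print(overall_accuracy(cm))        # 0.98
print(round(cohens_kappa(cm), 3))  # 0.878
```

Note that kappa is lower than accuracy whenever one class dominates, which is why both are commonly reported for burned-area maps.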

    Multi-sensor cloud and cloud shadow segmentation with a convolutional neural network

    Cloud and cloud shadow segmentation is a crucial pre-processing step for any application that uses multi-spectral satellite images. In particular, disaster related applications (e.g., flood monitoring or rapid damage mapping), which are highly time- and data-critical, require methods that produce accurate cloud and cloud shadow masks in short time while being able to adapt to large variations in the target domain (induced by atmospheric conditions, different sensors, scene properties, etc.). In this study, we propose a data-driven approach to semantic segmentation of cloud and cloud shadow in single-date images based on a modified U-Net convolutional neural network that aims to fulfil these requirements. We train the network on a global database of Landsat OLI images for the segmentation of five classes (shadow, cloud, water, land and snow/ice). We compare the results to state-of-the-art methods, prove the model's generalization ability across multiple satellite sensors (Landsat TM, Landsat ETM+, Landsat OLI and Sentinel-2) and show the influence of different training strategies and spectral band combinations on the performance of the segmentation. Our method consistently outperforms Fmask and a traditional Random Forest classifier on a globally distributed multi-sensor test dataset in terms of accuracy, Cohen's Kappa coefficient, Dice coefficient and inference speed. The results indicate that a reduced feature space composed solely of red, green, blue and near-infrared bands already produces good results for all tested sensors. If available, adding shortwave-infrared bands can increase the accuracy. Contrast and brightness augmentations of the training data further improve the segmentation performance. The best performing U-Net model achieves an accuracy of 0.89, Kappa of 0.82 and Dice coefficient of 0.85, while running the inference over 896 test image tiles at 44.8 seconds/megapixel (2.8 seconds/megapixel on GPU). The Random Forest classifier reaches an accuracy of 0.79, Kappa of 0.65 and Dice coefficient of 0.74 with 3.9 seconds/megapixel inference time (on CPU) on the same training and testing data. The rule-based Fmask method takes significantly longer (277.8 seconds/megapixel) and produces results with an accuracy of 0.75, Kappa of 0.60 and Dice coefficient of 0.72.
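The Dice coefficient used alongside accuracy and Kappa can be computed per class and averaged; a minimal sketch for the five classes named above (the class encoding and the random masks are hypothetical, for illustration only):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, n_classes=5):
    """Mean Dice score over classes: 2|T ∩ P| / (|T| + |P|) per class."""
    scores = []
    for c in range(n_classes):
        t = (y_true == c)
        p = (y_pred == c)
        denom = t.sum() + p.sum()
        if denom == 0:
            continue  # class absent in both masks: skip
        scores.append(2.0 * np.logical_and(t, p).sum() / denom)
    return float(np.mean(scores))

# Hypothetical masks with five classes:
# 0=shadow, 1=cloud, 2=water, 3=land, 4=snow/ice
rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=(64, 64))
y_pred = y_true.copy()
print(dice_coefficient(y_true, y_pred))  # 1.0 for identical masks
```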

    UKIS-CSMASK: A Python package for multi-sensor cloud and cloud-shadow segmentation

    Cloud and cloud shadow segmentation is a crucial pre-processing step for any application that uses multi-spectral satellite images. In particular, time-critical disaster applications require accurate and immediate cloud and cloud shadow masks while being able to adapt to possibly large variations caused by different sensor characteristics, scene properties or atmospheric conditions. This study introduces the newly developed open-source Python package ukis-csmask for cloud and cloud shadow segmentation in multi-spectral satellite images. Segmentation with ukis-csmask is performed with a pre-trained Convolutional Neural Network based on a U-Net architecture. It works directly on Level-1C data, eliminating the need for prior atmospheric correction. Images need to be in top-of-atmosphere reflectance and include at least the Blue, Green, Red, NIR, SWIR1 and SWIR2 spectral bands. We provide a performance evaluation on a recent benchmark dataset for cloud and cloud shadow segmentation and prove the generalization ability of our method across multiple satellites (Landsat-5, Landsat-7, Landsat-8, Landsat-9 and Sentinel-2). We also show the influence of augmentation and image bands on the segmentation performance and compare it to the widely used Fmask algorithm and a Random Forest classifier. Compared to previous work in this direction, our study focuses on multi-sensor generalization ability, simplicity and efficiency and provides a ready-to-use software package that has been thoroughly tested.
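The input requirement stated above (top-of-atmosphere reflectance) can be illustrated for Sentinel-2 Level-1C data, where reflectance is obtained by dividing the digital numbers by the quantification value of 10000 (processing baselines from 04.00 onwards additionally apply a radiometric add-offset). The array shapes and values below are hypothetical; consult the ukis-csmask README for the exact expected band order and interface:

```python
import numpy as np

QUANTIFICATION = 10000.0  # Sentinel-2 L1C quantification value

def dn_to_toa_reflectance(dn, offset=0.0):
    """Convert L1C digital numbers to top-of-atmosphere reflectance.
    Pass the baseline >= 04.00 radiometric offset via `offset` if applicable."""
    return (dn.astype(np.float32) + offset) / QUANTIFICATION

# Hypothetical DN cube for the six required bands,
# ordered Blue, Green, Red, NIR, SWIR1, SWIR2 (bands, rows, cols)
dn = np.full((6, 4, 4), 5000, dtype=np.uint16)
toa = dn_to_toa_reflectance(dn)
print(toa[0, 0, 0])  # 0.5
```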

    Improving reliability in flood mapping by generating a global seasonal reference water mask using Sentinel-1/2 time-series data

    In many regions of the world, variable intra-annual climatic and hydrologic conditions result in a strong seasonality of the water extent throughout the year. This behaviour, however, is usually not reflected in satellite-based flood emergency mapping. This may lead to non-reliable representations of the flood extent and to misleading information within disaster management activities. In order to be able to separate flooding from normally present seasonal water coverage, up-to-date, high-resolution information on the seasonal water cover is crucial. In this work, we present an automatic methodology to generate a global and consistent permanent and seasonal reference water product based on high-resolution Earth Observation data, specifically designed for use within flood mapping activities. The water masks are primarily based on the time-series analysis of optical Sentinel-2 imagery, which is complemented by Sentinel-1 Synthetic Aperture Radar-based information in data-scarce regions. The methodology has been developed based on data from five globally distributed study areas (Australia, Germany, India, Mozambique, and Sudan). Within this work, results for Australia and India are demonstrated and systematically compared with external reference water products. Results show that the proposed product gives a more reliable picture of flood-affected areas in the frame of disaster response.
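The core idea of separating permanent from seasonal water can be sketched as a per-pixel occurrence frequency over a stack of binary water masks. The thresholds and class encoding below are hypothetical illustrations, not the paper's actual parameters:

```python
import numpy as np

def reference_water_classes(water_stack, perm_thresh=0.8, seas_thresh=0.1):
    """Classify pixels by water occurrence frequency across a time series.
    water_stack: (dates, rows, cols) binary masks.
    Thresholds are hypothetical, for illustration only."""
    freq = water_stack.mean(axis=0)                 # fraction of dates with water
    classes = np.zeros(freq.shape, dtype=np.uint8)  # 0 = no reference water
    classes[freq >= seas_thresh] = 1                # 1 = seasonal water
    classes[freq >= perm_thresh] = 2                # 2 = permanent water
    return classes

# Hypothetical stack: 10 observation dates over a 2x2 pixel area
stack = np.zeros((10, 2, 2), dtype=np.uint8)
stack[:, 0, 0] = 1    # water on every date  -> permanent
stack[:5, 0, 1] = 1   # water on half the dates -> seasonal
print(reference_water_classes(stack))  # [[2 1] [0 0]]
```

At flood-mapping time, only detections outside classes 1 and 2 would then be reported as flood.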

    Global Flood Monitoring Webinar 2022: Products Outline

    The Copernicus Emergency Management Service has been developing a new operational product providing continuous global, systematic, and automated monitoring of all land surface areas possibly affected by flooding. This new global flood monitoring (GFM) product processes all incoming Sentinel-1 images and analyses them using an ensemble of three flood detection algorithms, ensuring high timeliness and product quality. The workshop, in the form of a webinar, will present the currently available data and products developed as part of the GFM, focusing on the high-resolution satellite-based products for flood monitoring at global scale, freely accessible in real time through GloFAS.

    Automatic near-real time flood extent and duration mapping based on multi-sensor Earth Observation data

    In order to support disaster management activities related to flood situations, an automatic system for near-real time mapping of flood extent and duration using multi-sensor satellite data is developed. The system is based on four fully automatic processing chains for the derivation of the inundation extent from Sentinel-1 and TerraSAR-X radar as well as from optical Sentinel-2 and Landsat data. While the systematic acquisition plan of the Sentinel-1/2 and Landsat satellites allows continuous monitoring of inundated areas at an interval of a few days, the TerraSAR-X processing chain has to be triggered on demand over the disaster-affected areas. Besides flood extent masks, flood duration products are generated to indicate the temporal stability and evolution of flood events. The flood monitoring system is demonstrated on a severe flood situation in Mozambique related to cyclone Idai in 2019.
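A flood duration product can be approximated per pixel from the first and last acquisition on which water was detected. This is a simplified sketch under stated assumptions (regular binary masks, day-offset acquisition dates), not the operational algorithm:

```python
import numpy as np

def flood_duration(masks, dates):
    """Per-pixel flood duration in days from a time series of binary masks.
    masks: (dates, rows, cols) flood detections; dates: day offsets (ints).
    Duration spans first to last detection, inclusive (a simplification)."""
    masks = np.asarray(masks, dtype=bool)
    first = np.argmax(masks, axis=0)                             # first detection index
    last = masks.shape[0] - 1 - np.argmax(masks[::-1], axis=0)   # last detection index
    ever = masks.any(axis=0)
    return np.where(ever, dates[last] - dates[first] + 1, 0)

# Hypothetical acquisitions on days 0, 3, 6, 9 over a 2x2 area
dates = np.array([0, 3, 6, 9])
masks = np.array([
    [[1, 0], [0, 0]],
    [[1, 1], [0, 0]],
    [[1, 0], [0, 0]],
    [[0, 0], [0, 0]],
])
print(flood_duration(masks, dates))  # [[7 1] [0 0]]
```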

    Sentinel-1-based water and flood mapping: benchmarking convolutional neural networks against an operational rule-based processing chain

    In this study, the effectiveness of several convolutional neural network architectures (AlbuNet-34/FCN/DeepLabV3+/U-Net/U-Net++) for water and flood mapping using Sentinel-1 amplitude data is compared to an operational rule-based processor (S-1FS). This comparison is made using a globally distributed dataset of Sentinel-1 scenes and the corresponding ground truth water masks derived from Sentinel-2 data to evaluate the performance of the classifiers on a global scale in various environmental conditions. The impact of using single- versus dual-polarized input data on the segmentation capabilities of AlbuNet-34 is evaluated. The weighted cross entropy loss is combined with the Lovász loss and various data augmentation methods are investigated. Furthermore, the concept of atrous spatial pyramid pooling used in DeepLabV3+ and the multiscale feature fusion inherent in U-Net++ are assessed. Finally, the generalization capacity of AlbuNet-34 is tested in a realistic flood mapping scenario by using additional data from two flood events and the Sen1Floods11 dataset. The model trained using dual-polarized data significantly outperforms S-1FS and increases the intersection over union (IoU) score by 5%. Using a weighted combination of the cross entropy and the Lovász loss increases the IoU score by another 2%. Geometric data augmentation degrades the performance, while radiometric data augmentation leads to better testing results. FCN, DeepLabV3+, U-Net and U-Net++ do not perform significantly differently from AlbuNet-34. Models trained on data showing no distinct inundation perform very well in mapping the water extent during two flood events, reaching IoU scores of 0.96 and 0.94, respectively, and perform comparatively well on the Sen1Floods11 dataset.
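The IoU score reported throughout is the standard intersection-over-union of binary water masks; a minimal sketch with hypothetical masks (not data from the study):

```python
import numpy as np

def iou_score(y_true, y_pred):
    """Intersection over union for binary water masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union if union else 1.0  # empty masks agree trivially

# Hypothetical 4x4 masks: truth covers the top two rows,
# the prediction is shifted down by one row
truth = np.zeros((4, 4), dtype=np.uint8)
truth[:2, :] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, :] = 1
print(round(iou_score(truth, pred), 3))  # 0.333 (4 shared of 12 total pixels)
```

Because IoU penalizes both false positives and false negatives relative to the union, it is a stricter summary than overall accuracy for sparse water classes.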